
Search results for: "Bengio"


25 mentions found


On Wednesday, the Association for Computing Machinery, the world’s largest society of computing professionals, announced that this year’s Turing Award will go to Avi Wigderson, an Israeli-born mathematician and theoretical computer scientist who specializes in randomness. Often called the Nobel Prize of computing, the Turing Award comes with a $1 million prize. The award is named for Alan Turing, the British mathematician who helped create the foundations for modern computing in the mid-20th century. Other recent winners include Ed Catmull and Pat Hanrahan, who helped create the computer-generated imagery, or C.G.I., that drives modern movies and television, and the A.I. researchers Geoffrey Hinton, Yann LeCun and Yoshua Bengio, who nurtured the techniques that gave rise to chatbots like ChatGPT.
Persons: Turing, Avi Wigderson, Alan Turing, Ed Catmull, Pat Hanrahan, Geoffrey Hinton, Yann LeCun, Yoshua Bengio Organizations: Association for Computing Machinery Locations: Israeli, British
OpenAI and Meta are close to unveiling AI models that can reason and plan, the FT reported. OpenAI and Meta are reportedly preparing to release more advanced AI models that would be able to help problem-solve and take on more complex tasks. Representatives for Meta and OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours. Getting AI models to reason and plan is an important step toward achieving artificial general intelligence (AGI), which both Meta and OpenAI have claimed to be aiming for. Elon Musk, a longtime AI skeptic, recently estimated that AI would outsmart humans within two years.
Persons: , Brad Lightcap, Joelle Pineau, OpenAI, John Carmack, Bengio, Geoffrey Hinton, Elon Musk, Musk Organizations: Meta, Service, Financial Times, Business
However, overemphasizing the dangers of AI risks paralyzing debate at a pivotal moment. "I'm not scared of A.I.," LeCun told the magazine. While Hinton and Meta's chief AI scientist LeCun have butted heads, fellow collaborator and third AI godfather Yoshua Bengio has stressed that this unknown is the real issue.
Persons: what's, Geoffrey Hinton, Hinton, Yann LeCun, Turing, LeCun, Yoshua Bengio, Yann, Joshua Rothman, it's Organizations: Service, Big Tech, Google, Yorker Locations: Hinton, Canadian
An AI godfather says we should all be worried about the concentration of power in the AI sector. Bengio said the control of powerful AI systems was a central question for democracy. The concentration of power in the AI arena is one of the main risks facing the industry, an AI godfather says. Regulation, at least in its current form, will not be the boost for big tech companies that some industry experts have suggested it could be, he added.
Persons: Yoshua Bengio, Bengio, Yoshua, I've, Yann LeCun, OpenAI's Sam Altman, LeCun, Anthropic's Dario Amodei, Bengio Organizations: Service Locations: Canadian, ChatGPT
China's delegate to the meeting, Vice Minister of Science and Technology Wu Zhaohui, was present on Thursday, his ministry said on Friday. The Chinese technology ministry declined to say why China did not agree to the proposal, which was about AI model testing. British Prime Minister Rishi Sunak chaired Thursday's meeting that comprised "a small group of like-minded senior representatives from governments around the world", Britain said, including the U.S. vice president and the EC president. Some British lawmakers had criticised China's participation in the inaugural AI summit. Sunak told reporters: "Some said we shouldn't even invite China, others said we would never get an agreement with them.
Persons: Ursula von der Leyen, Kamala Harris, Rishi Sunak, Giorgia Meloni, Antonio Guterres, Yoshua Bengio, Mila, Microsoft Brad, Technology Wu Zhaohui, Wu, Oliver Dowden, Sunak, Paul Sandle, Brenda Goh, Alistair Smout, Cynthia Osterman Organizations: Italy's, UN, Quebec AI Institute, Microsoft, Safety, Science, Technology, Bloomberg, U.S, European Union, Thomson Locations: British, SHANGHAI, LONDON, China, Britain, Beijing, Bletchley Park, England, United States, Bletchley, London, Shanghai
British Prime Minister Rishi Sunak attends an in-conversation event with Tesla and SpaceX's CEO Elon Musk in London, Britain, Thursday, Nov. 2, 2023. Risks around rapidly developing AI have been an increasingly high priority for policymakers since Microsoft-backed (MSFT.O) OpenAI released ChatGPT to the public last year. "It was fascinating that just as we announced our AI safety institute, the Americans announced theirs," said attendee Nigel Toon, CEO of British AI firm Graphcore. China’s vice minister of science and technology said the country was willing to work with all sides on AI governance. Yoshua Bengio, an AI pioneer appointed to lead a "state of the science" report commissioned as part of the Bletchley Declaration, told Reuters the risks of open-source AI were a high priority.
Persons: Rishi Sunak, Tesla, Elon Musk, Kirsty Wigglesworth, Sam Altman, Kamala Harris, Ursula von der Leyen, China –, Sunak, Finance Bruno Le Maire, Vera Jourova, Jourova, Harris, Nigel Toon, Wu Zhaohui, Musk, you’ve, Martin Coulter, Paul Sandle, Matt Scuffham, Louise Heavens Organizations: British, Elon, U.S, European Commission, Microsoft, of, Finance, EU, Reuters, Thomson Locations: London, Britain, China, Bletchley, U.S, South Korea, France, United States
AI godfather Yoshua Bengio says the risks of AI should not be underplayed. His remarks come after Meta's Yann LeCun accused Bengio and AI founders of "fear-mongering." Claims by Meta's chief AI scientist, Yann LeCun, that AI won't wipe out humanity are dangerous and wrong, according to one of his fellow AI godfathers. "If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun wrote. "Existential risk is one problem but the concentration of power, in my opinion, is the number two problem," he said.
Persons: Yoshua Bengio, Bengio, Meta's Yann LeCun, , Yann LeCun, Yann, LeCun, overstating, Andrew Ng, Geoffrey Hinton, Hinton Organizations: Service, Bell Labs, Google Locations: Bengio
So-called frontier AI refers to the latest and most powerful systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. One of Sunak’s major goals is to get delegates to agree on a first-ever communique about the nature of AI risks. However, in the same speech, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first. A White House official gave details of Harris’s speech, speaking on condition of anonymity to discuss her remarks in advance.
Persons: Google's Bard, Rishi Sunak's, Kamala Harris, who’s, Elon Musk, Ursula von der Leyen, Yoshua, Sunak, Harris, Biden’s, Jill Lawless Organizations: , British, Safety, U.S, White, Associated Locations: BLETCHLEY, England, London, China, Bletchley
Andrew Ng, formerly of Google Brain, said Big Tech is exaggerating the risk of AI wiping out humans. Some of the biggest figures in artificial intelligence are publicly arguing whether AI is really an extinction risk, after AI scientist Andrew Ng said such claims were a cynical play by Big Tech. Andrew Ng, a cofounder of Google Brain, suggested to The Australian Financial Review that Big Tech was seeking to inflate fears around AI for its own benefit. Meta's chief AI scientist Yann LeCun, also known as an AI godfather for his work with Hinton, sided with Ng.
Persons: Andrew Ng, OpenAI's Sam Altman, , Andew Ng, Ng, It's, Elon Musk, Sam Altman, DeepMind, Demis Hassabis, Googler Geoffrey Hinton, Yoshua, godfathers, — Geoffrey Hinton, Yann LeCun, Hinton, LeCun, Meredith Whittaker, Whittaker Organizations: Google, Big Tech, AI's, Service, Australian Financial Locations: Hinton, British, Canadian, @geoffreyhinton
Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has capabilities that could endanger humanity. The debate comes to a head Wednesday, when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. In a speech last week, Sunak said only governments — not AI companies — can keep people safe from the technology’s risks. Frontier AI is shorthand for the latest and most powerful systems that go right up to the edge of AI’s capabilities. That makes frontier AI systems “dangerous because they’re not perfectly knowledgeable,” Clune said.
Persons: , Rishi Sunak, It’s, Kamala Harris, Ursula von der Leyen, Google’s, Alan Turing, Sunak, , Jeff Clune, Clune, Elon, Sam Altman, He’s, Joe Biden, Geoffrey Hinton, Yoshua, ” Clune, , it's, Francine Bennett, Ada Lovelace, Deb Raji, ” Raji, it’s, shouldn’t, Raji, DeepMind, Anthropic, Dario Amodei, Jack Clark, , Carsten Jung, Jill Lawless Organizations: British, U.S, European, University of British, AI Safety, European Union, Clune, Ada, Ada Lovelace Institute, House, University of California, ” Tech, Microsoft, Institute for Public Policy Research, Regulators, Associated Press Locations: Bletchley, University of British Columbia, State, EU, Brussels, China, U.S, Beijing, London, Berkeley
Meta's Yann LeCun thinks tech bosses' bleak comments on AI risks could do more harm than good. Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can't be refuted with snark and corporate lobbying alone. https://t.co/Zv1rvOA3Zz — Max Tegmark (@tegmark) October 29, 2023. LeCun says founder fretting is just lobbying. Since the launch of ChatGPT, AI's power players have become major public figures. The focus on hypothetical dangers also diverts attention away from the boring-but-important question of how AI development actually takes shape. For LeCun, keeping AI development closed is a real reason for alarm.
Persons: Meta's Yann LeCun, , Yann LeCun, Sam Altman, Anthropic's Dario Amodei, Altman, Hassabis, LeCun, Amodei, LeCun's, Max Tegmark, Turing, Hinton, Russell, Tegmark, I'd, fretting, Elon Musk, OpenAI's, OpenAI Organizations: Service, Google, Hassabis, Research, Meta Locations: Bengio, West Coast, China
The letter, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks. Currently there are no broad-based regulations focused on AI safety, and the European Union's first AI legislation has yet to become law, with lawmakers still to agree on several issues. "It (investments in AI safety) needs to happen fast, because AI is progressing much faster than the precautions taken," he said. Since the launch of OpenAI's generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, including calling for a six-month pause in developing powerful AI systems. "There are more regulations on sandwich shops than there are on AI companies."
Persons: Dado Ruvic, Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song, Yuval Noah Harari, Elon Musk, Stuart Russell, Supantha Mukherjee, Miral Organizations: REUTERS, Rights, Safety, European, Elon, Thomson Locations: Rights STOCKHOLM, London, European Union, British, Stockholm
DeepMind's Mustafa Suleyman recently talked about setting boundaries on AI with the MIT Tech Review. "You wouldn't want to let your little AI go off and update its own code without you having oversight," he told the MIT Technology Review. Last year, Suleyman cofounded the AI startup Inflection AI, whose chatbot Pi is designed to be a neutral listener and provide emotional support. Suleyman told the MIT Technology Review that though Pi is not "as spicy" as other chatbots, it is "unbelievably controllable." And while Suleyman told the MIT Technology Review he's "optimistic" that AI can be effectively regulated, he doesn't seem to be worried about a singular doomsday event.
Persons: DeepMind's Mustafa Suleyman, Mustafa Suleyman, Suleyman, there's, Sam Altman, Elon Musk, Mark Zuckerberg, — Suleyman, Pi, Hassabis, Satya Nadella, Geoffrey Hinton, Yoshua Organizations: MIT Tech, Service, MIT Technology, AIs, Life Institute Locations: Wall, Silicon, Washington
Over the course of three conversations this summer, Acemoglu told me he's worried we're currently hurtling down a road that will end in catastrophe. "There's a fair likelihood that if we don't do a course correction, we're going to have a truly two-tier system," Acemoglu told me. "I was following the canon of economic models, and in all of these models, technological change is the main mover of GDP per capita and wages," Acemoglu told me. In later empirical work, Acemoglu and Restrepo showed that that was exactly what had happened. "I realize this is a very, very tall order," Acemoglu told me.
Persons: who's, Katya Klinova, Daron Acemoglu, Simon Johnson, Acemoglu, Johnson, we've, he's, we're, Power, James Robinson, , Robinson, David Autor, Pascual Restrepo, Restrepo, John Maynard Keynes, Simon Simard, Lord Byron, Eric Van Den Brulle, hasn't, it's, Gita Gopinath, Paul Romer, Romer, What's, Daron, GPT, Asu Ozdaglar, It's, Mark Madeo, Tattong, Erik Brynjolfsson, Brynjolfsson, There's, Yoshua Bengio, Yuval Noah Harari, Andrew Yang, Elon Musk, I've, That's, Aki Ito Organizations: Getty, MIT, of Technology, Hulton, London School of Economics, Stagecoach, Technology, , International Monetary Fund, Microsoft, Asu, Companies, Computer, Greenpeace, Communications, Big Tech, Workers Locations: Silicon Valley, America, Boston, Istanbul, Turkey, Acemoglu, England, United States, Britain, Australia
Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI." Geoffrey Hinton, a trailblazer in the AI field, recently quit his job at Google and said he regrets the role he played in developing the technology. Hinton worked at Google for over a decade but quit this past spring so that he could speak more freely about the rapid development of AI technology, he said. After quitting, he even said that a part of him regrets the role he played in advancing the technology. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously.
Persons: Geoffrey Hinton, Noah Berger, Yann LeCun, Bengio, Hinton, He's Organizations: University of Toronto, Google, Associated Press
The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and publicly share information with governments and civil society. Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry. “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.
Persons: Biden, , Brad Smith, Dario Amodei, Bengio, ” Amodei, Amodei, Chuck Schumer, Schumer Organizations: CNN, Frontier Model, Google, Microsoft, US, European Union, Amazon, Meta, Companies, European
WASHINGTON, July 18 (Reuters) - Artificial intelligence startup Anthropic's CEO Dario Amodei will testify on July 25 at a U.S. Senate hearing on artificial intelligence as lawmakers consider potential regulations for the fast-growing technology, the Senate panel scheduling the hearing said on Tuesday. "It’s our obligation to address AI’s potential threats and risks before they become real," said Democratic Senator Richard Blumenthal, the subcommittee chair. "We are on the verge of a new era, with major consequences for workers, consumer privacy, and our society." President Joe Biden met with the CEOs of top artificial intelligence companies in May, including Amodei, and made clear they must ensure their products are safe before they are deployed. The report would help push federal financial regulators to adopt and adapt to AI changes disrupting the industry, Schumer's office said.
Persons: Dario Amodei, Amodei, Yoshua Bengio, Stuart Russell, Richard Blumenthal, Josh Hawley, Joe Biden, Chuck Schumer, David Shepardson, Leslie Adler, Chris Reese Organizations: U.S, Senate, Privacy, Technology, Google, Democratic, Republican, Thomson
"High level, we want this to become something like your personal AI friend," said developer Div Garg, whose company MultiOn is beta-testing an AI agent. The race towards increasingly autonomous AI agents has been supercharged by the March release of GPT-4 by developer OpenAI, a powerful upgrade of the model behind ChatGPT - the chatbot that became a sensation when released last November. GPT-4 facilitates the type of strategic and adaptable thinking required to navigate the unpredictable real world, said Vivian Cheng, an investor at venture capital firm CRV who has a focus on AI agents. OpenAI itself is very interested in AI agent technology, according to four people briefed on its plans. There are at least 100 serious projects working to commercialize agents, said Matt Schlicht, who writes a newsletter on AI.
Persons: Siri, Alexa, Tony Stark's, Kanjun Qiu, Reid Hoffman, Mustafa Suleyman, Qiu, OpenAI, Vivian Cheng, CRV, Aravind Srinivas, Jarvis, Yoshua Bengio, Satya Nadella, Apple's Siri, it's, Google, Edward Grefenstette, Jason Franklin, WVV Capital, Hesam Motlagh, Matt Schlicht, Anna Tong, Jeffrey Dastin, Kenneth Li Organizations: Microsoft, Google, U.S . Federal Trade Commission, Reuters, FTC, OpenAI's, Financial Times, Amazon, Alexa, Investors, WVV, Google Ventures, Entrepreneurs, Thomson Locations: Silicon, Jarvis, GPT, Cognosys, San Francisco, Palo Alto
July 12 (Reuters) - Chip designer Nvidia (NVDA.O) will invest $50 million to speed up training of Recursion's (RXRX.O) artificial intelligence models for drug discovery, the companies said on Wednesday, sending the biotech firm's shares surging about 62%. Recursion, whose advisers include AI pioneer Yoshua Bengio, will use its biological and chemical datasets exceeding 23,000 terabytes to train AI models on Nvidia's cloud platform. Nvidia, seen as a big winner of the boom in artificial intelligence, could then license those models to biotech firms through BioNeMo, a generative AI cloud service for drug discovery that it rolled out earlier this year. The investment comes as Recursion strengthened its AI focus in May by snapping up two companies in the AI-driven drug discovery space for $87.5 million. The Salt Lake City, Utah-based company's current partners include Bayer (BAYGn.DE) and Roche (ROG.S).
Persons: Nvidia, Roche, Mubadala, Baillie Gifford, Chavi Mehta, Stephen Nellis, Mariam Sunny, Shilpi Majumdar, Sriraj Organizations: Nvidia, Bayer, Baillie Gifford & Co, Thomson Locations: BioNeMo, Salt Lake City , Utah, Abu, Bengaluru, San Francisco
STOCKHOLM, June 30 (Reuters) - The proposed EU Artificial Intelligence legislation would jeopardise Europe's competitiveness and technological sovereignty, according to an open letter signed by more than 160 executives at companies ranging from Renault (RENA.PA) to Meta (META.O). EU lawmakers agreed this month to a set of draft rules under which systems like ChatGPT would have to disclose AI-generated content, help distinguish so-called deep-fake images from real ones and ensure safeguards against illegal content. Since ChatGPT became popular, several open letters have been issued calling for regulation of AI and raising the "risk of extinction from AI." The third AI "godfather," Yann LeCun, who works at Meta, signed Friday's letter challenging the EU regulations. The letter warned that under the proposed EU rules technologies like generative AI would become heavily regulated and companies developing such systems would face high compliance costs and disproportionate liability risks.
Persons: ChatGPT, Elon Musk, Sam Altman, Geoffrey Hinton, Yoshua, Yann LeCun, OpenAI's Altman, Supantha Mukherjee, Jamie Freed Organizations: EU Artificial Intelligence, Renault, EU, Meta, Spanish, Thomson Locations: STOCKHOLM, French, Europe, Stockholm
Yann LeCun says concerns that AI could pose a threat to humanity are "preposterously ridiculous." He was part of a team that won the Turing Award in 2018 for breakthroughs in machine learning. An AI expert has said concerns that the technology could pose a threat to humanity are "preposterously ridiculous." Marc Andreessen warned against "full-blown moral panic about AI" and said that people have a "moral obligation" to encourage its development. He added that concerns about AI were overstated, and that if people realized the technology wasn't safe, they shouldn't build it, per BBC News.
Persons: Yann LeCun, Yoshua Bengio, Geoffrey Hinton, LeCun, Bing, DALL, Bengio, Elon Musk, Steve Wozniak, Bill Gates, Marc Andreessen Organizations: BBC News, BBC, Apple, Center, AI Safety, Yale's, Leadership Institute, CNN Locations: Paris
There's a chance that AI development could get "catastrophic," Yoshua Bengio told The New York Times. "Today's systems are not anywhere close to posing an existential risk," but they could in the future, he said. "Today's systems are not anywhere close to posing an existential risk," Yoshua Bengio, a professor at the Université de Montréal, told the publication. Marc Andreessen spoke even more strongly in a blog post last week in which he warned against "full-blown moral panic about AI" and described "AI risk doomers" as a "cult." "AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive," he wrote.
Persons: There's, Yoshua Bengio, there's, Montréal, Bengio, Anthony Aguirre, Microsoft Bing, It's, Aguirre, Elon Musk, Steve Wozniak, Anthropic, Eric Schmidt, Bill Gates, Marc Andreessen, it's, Andreessen Organizations: New York Times, Morning, University of California, Times, Microsoft, Life Institute, Bengio, Apple, Center, AI Safety Locations: Santa Cruz
Yoshua Bengio is one of three AI "godfathers" who won the Turing Award for breakthroughs in 2018. He told the BBC that he would've prioritized safety if he'd known how quickly AI would progress. A professor known as one of three AI "godfathers" told the BBC that he felt "lost" over his life's work. "We also need the people who are close to these systems to have a kind of certification," Bengio told the broadcaster. On Tuesday, he signed a statement issued by the Center for AI Safety, which warns the technology poses an "extinction" risk comparable to nuclear war.
Persons: Yoshua Bengio, Geoffrey Hinton, Yann LeCun, ChatGPT, Sam Altman, Bengio, That's, Altman, Hinton, he's, LeCun, Organizations: BBC, Morning, Center, AI Safety, Google, New York Times Locations: Hinton
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS). As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft (MSFT.O) and Google (GOOGL.O). Elon Musk and a group of AI experts and industry executives were the first ones to cite potential risks to society in April. AI pioneer Hinton earlier told Reuters that AI could pose a "more urgent" threat to humanity than climate change. Last week OpenAI CEO Sam Altman referred to EU AI - the first efforts to create a regulation for AI - as over-regulation and threatened to leave Europe.
The Center for AI Safety's statement compares the risks posed by AI with nuclear war and pandemics. AI experts including Geoffrey Hinton and Yoshua Bengio have also supported the statement. The CEOs of three leading AI companies have signed a statement issued by the Center for AI Safety (CAIS) warning of the "extinction" risk posed by artificial intelligence. Per CAIS, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei have all signed the public statement, which compared the risks posed by AI with nuclear war and pandemics. AI experts including Geoffrey Hinton and Yoshua Bengio are among the statement's signatories, along with executives at Microsoft and Google.
Total: 25